
    Do Deep Neural Networks Model Nonlinear Compositionality in the Neural Representation of Human-Object Interactions?

    Visual scene understanding often requires processing human-object interactions. Here we explore whether, and how well, Deep Neural Network (DNN) models capture features similar to the brain's representation of humans, objects, and their interactions. We investigate brain regions that process human-, object-, or interaction-specific information, and establish correspondences between them and DNN features. Our results suggest that the selectivity of these regions to particular visual stimuli can be inferred from DNN representations. We also map features from the DNN to these regions, thus linking the DNN representations to those found in specific parts of the visual cortex. In particular, our results suggest that a typical DNN representation encodes compositional information for human-object interactions that goes beyond a linear combination of the encodings for the two components, suggesting that DNNs may be able to model this important property of biological vision. Comment: 4 pages, 2 figures; presented at CCN 201
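    The compositionality claim above — an interaction encoding that exceeds any linear combination of the component encodings — can be sketched numerically. This is not the paper's analysis: the feature dimensionality, the weights, and the tanh interaction term below are all hypothetical stand-ins for real DNN activations. The idea is to fit the best linear combination of the two component encodings and measure how much variance it leaves unexplained.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical DNN feature dimensionality

# Hypothetical feature vectors (in the study these would come from a DNN
# layer applied to human-only, object-only, and interaction images).
human = rng.normal(size=d)
obj = rng.normal(size=d)
# Construct an interaction vector with an explicit nonlinear component.
interaction = 0.6 * human + 0.4 * obj + 0.5 * np.tanh(human * obj)

# Best linear combination of the two component encodings.
X = np.stack([human, obj], axis=1)  # shape (d, 2)
coef, _, _, _ = np.linalg.lstsq(X, interaction, rcond=None)
residual = interaction - X @ coef

# Fraction of variance the linear model fails to explain; a value well
# above 0 is the signature of super-linear (compositional) encoding.
unexplained = residual.var() / interaction.var()
print(round(float(unexplained), 3))
```

    A residual-variance test like this is only one way to operationalize "beyond a linear combination"; the paper's actual methodology maps DNN features to brain regions.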

    Role of expectation and working memory constraints in Hindi comprehension: An eyetracking corpus analysis

    We used the Potsdam-Allahabad Hindi eye-tracking corpus to investigate the role of word-level and sentence-level factors during sentence comprehension in Hindi. Extending previous work on this eye-tracking data, we investigate the role of surprisal and retrieval-cost metrics during sentence processing. While controlling for word-level predictors (word complexity, syllable length, unigram and bigram frequencies) as well as sentence-level predictors such as integration and storage costs, we find a significant effect of surprisal on first-pass reading times (FPRT): higher surprisal leads to longer FPRTs. An effect of retrieval cost was found only for a higher degree of parser parallelism. Interestingly, while surprisal has a significant effect on FPRT, storage cost (another prediction-based metric) does not; a significant effect of storage cost shows up only in total fixation time (TFT), indicating that these two measures perhaps capture different aspects of prediction. The study replicates previous findings that both prediction-based and memory-based metrics are required to account for processing patterns during sentence comprehension. The results also show that parser model assumptions are critical for drawing generalizations about the utility of a metric (e.g. surprisal) across various phenomena in a language.
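    The controlled-regression logic described above can be sketched in a few lines. This is not the study's analysis (which uses the real eye-tracking corpus and richer models); the data below are simulated with a positive surprisal effect built in, just to show how a surprisal slope is estimated after partialling out word-level controls.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical number of fixated words

# Hypothetical word-level controls (standing in for word complexity,
# syllable length, unigram/bigram frequency) plus a surprisal predictor.
controls = rng.normal(size=(n, 4))
surprisal = rng.normal(size=n)
# Simulate first-pass reading times with a built-in surprisal effect
# of 12 ms per unit surprisal, plus noise.
fprt = (200 + controls @ np.array([5.0, 3.0, -4.0, -2.0])
        + 12.0 * surprisal + rng.normal(scale=10.0, size=n))

# OLS with an intercept: the last coefficient is the surprisal slope
# after partialling out the control predictors.
X = np.column_stack([np.ones(n), controls, surprisal])
beta, _, _, _ = np.linalg.lstsq(X, fprt, rcond=None)
print(round(float(beta[-1]), 1))  # recovered surprisal slope, in ms
```

    In practice such analyses use (linear) mixed-effects models with random effects for subjects and items rather than plain OLS.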

    Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones?

    In English, the main subject and its associated verb must agree in grammatical number, a phenomenon known as Subject-Verb Agreement (SVA). It has been found that a noun intervening between the main subject and the verb, whose grammatical number is opposite to that of the main subject, can cause speakers to produce a verb that agrees with the intervening noun rather than the main subject; the intervening noun thus acts as an agreement attractor. Such attractors have also been shown to pose a challenge for RNN models without explicit hierarchical bias on SVA tasks. Previous work suggests that syntactic cues in the input can help such models choose hierarchical rules over linear rules for number agreement. In this work, we investigate the effects of the choice of training data, training algorithm, and architecture on hierarchical generalization. We observe that the models under consideration fail to perform well on sentences with no agreement attractor when trained solely on natural sentences with at least one attractor. Even with this biased training set, implicit hierarchical bias in the architecture (as in the Ordered Neurons LSTM) is not enough to capture syntax-sensitive dependencies. These results suggest that current RNNs do not capture the underlying hierarchical rules of natural language, but rather use shallower heuristics for their predictions.
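    The linear-vs-hierarchical distinction can be made concrete with a toy example (not the paper's models or data; the sentences and the noun-number lexicon below are hypothetical): a shallow "linear" heuristic agrees with the most recent noun before the verb, while the hierarchical rule agrees with the main subject, and the two diverge exactly when an attractor intervenes.

```python
# Toy sentences: (tokens, index of main subject, gold verb number).
# The first two contain an agreement attractor; the third does not.
SENTENCES = [
    (["the", "key", "to", "the", "cabinets", "is"], 1, "sg"),
    (["the", "keys", "to", "the", "cabinet", "are"], 1, "pl"),
    (["the", "key", "is"], 1, "sg"),
]

# Hypothetical noun-number lexicon.
NUMBER = {"key": "sg", "keys": "pl", "cabinet": "sg", "cabinets": "pl"}

def linear_rule(tokens):
    """Shallow heuristic: agree with the last noun before the verb."""
    nouns = [t for t in tokens if t in NUMBER]
    return NUMBER[nouns[-1]]

def hierarchical_rule(tokens, subj_idx):
    """Syntax-sensitive rule: agree with the main subject."""
    return NUMBER[tokens[subj_idx]]

for tokens, subj_idx, gold in SENTENCES:
    lin = linear_rule(tokens)
    hier = hierarchical_rule(tokens, subj_idx)
    # The linear rule fails precisely on the attractor sentences.
    print(lin == gold, hier == gold)
```

    Agreement benchmarks in this line of work essentially measure which of these two rules a trained model has internalized.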

    Search-time Efficient Device Constraints-Aware Neural Architecture Search

    Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive. Manually crafting a specialized architecture for every device is infeasible given their varying memory and computational constraints. To address this, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method for fast neural architecture search that incorporates edge-device constraints such as model size and floating-point operations, and uses weight sharing and channel bottleneck techniques to speed up the search. In our experiments, DCA-NAS outperforms manual architectures of similar size and is comparable to popular mobile architectures on image classification datasets such as CIFAR-10, CIFAR-100, and ImageNet-1k. Experiments with the DARTS and NAS-Bench-201 search spaces show the generalization capabilities of DCA-NAS. Further evaluation on Hardware-NAS-Bench discovered device-specific architectures with low inference latency and state-of-the-art performance. Comment: Accepted to 10th International Conference on Pattern Recognition and Machine Intelligence (PReMI) 202
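    As a rough illustration of constraint-aware search (this is plain random search with budget filtering, not the DCA-NAS algorithm; the budgets, proxy metrics, and scoring function below are all made up), candidates that violate the device's size and FLOPs budgets are discarded before scoring:

```python
import random

random.seed(0)

# Hypothetical device budgets of the kind DCA-NAS incorporates.
MAX_PARAMS_M = 5.0   # model-size budget, millions of parameters
MAX_FLOPS_M = 600.0  # compute budget, millions of FLOPs

def sample_candidate():
    """Sample a hypothetical architecture with toy proxy metrics."""
    depth = random.randint(4, 20)
    width = random.choice([16, 32, 64, 128])
    return {
        "depth": depth,
        "width": width,
        "params_m": depth * width * 0.004,           # toy size model
        "flops_m": depth * width * 0.5,              # toy compute model
        "score": 1.0 - 1.0 / (depth * width) ** 0.5,  # toy accuracy proxy
    }

best = None
for _ in range(200):
    cand = sample_candidate()
    # Constraint-aware step: reject candidates over the device budgets
    # before they are ever scored, shrinking the effective search space.
    if cand["params_m"] > MAX_PARAMS_M or cand["flops_m"] > MAX_FLOPS_M:
        continue
    if best is None or cand["score"] > best["score"]:
        best = cand

print(best)
```

    Real NAS methods replace the random sampling with differentiable or evolutionary search and the toy proxies with measured model size, FLOPs, or on-device latency, but the budget-filtering step is the same in spirit.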

    Prevalence and Risk Factors of Essential Hypertension and New Onset of Diabetes in Essential Hypertension in Rural Population of Haryana

    Objective: We conducted a well-designed prevalence study in the rural Mullana area of Haryana to determine the current prevalence of essential hypertension, the prescription pattern of antihypertensive drugs, and the associated risk of new-onset diabetes. Methods: A retrospective study of patient data (2672 patients) from 2009 to 2013 was carried out at the OPD of M. M. University hospital, Mullana, to establish the prevalence of essential hypertension, new-onset diabetes, and associated risk factors in the preceding years, along with the prescription pattern of antihypertensive drug therapy. Based on these results, a prospective study was conducted from January 2015 to December 2016, recruiting a total of 510 patients (270 with essential hypertension and 240 with essential hypertension plus new-onset diabetes) and 270 normal individuals. Results: In the retrospective study, data from 2672 patients were evaluated, showing a 41.21% prevalence of essential hypertension, 11.83% new-onset diabetes among essential hypertension patients, and 15.87% diabetic patients. Antihypertensive monotherapy was prescribed to 59.85% of patients and combination therapy to 40.15%, while the prospective study showed 40.37% of patients on monotherapy and 59.63% on combination therapy. The prospective study also showed that the anthropometric parameters, except age and height, were significantly associated with the risk of hypertension and new-onset diabetes. Conclusion: An increase in the prevalence of essential hypertension and associated risk factors was observed compared with previous studies and the retrospective study, reflected in the changed drug-therapy pattern and anthropometric parameters. Implementation of a large-scale awareness program is needed to combat these metabolic diseases.